MAP Estimators for Bayesian Nonparametrics

M. Dashti, K. J. H. Law, A. M. Stuart and J. Voss. MAP Estimators for Bayesian Nonparametrics. Inverse Problems, 29, 095017 (2013).
M. Dashti, K. J. H. Law, A. M. Stuart and J. Voss
Probability, Bayesian nonparametrics, inverse problems
2013

We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ_0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μ^y. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager–Machlup functional defined on the Cameron–Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier–Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics.
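As a brief sketch of the setting described in the abstract (notation chosen here for illustration rather than taken verbatim from the paper: Γ denotes the observational noise covariance, Φ the negative log-likelihood, and E the Cameron–Martin space of the prior μ_0):

\[
  y = G(u) + \eta, \qquad \eta \sim N(0,\Gamma),
\]
\[
  \frac{d\mu^{y}}{d\mu_{0}}(u) \propto \exp\bigl(-\Phi(u;y)\bigr),
\]
\[
  I(u) = \Phi(u;y) + \tfrac{1}{2}\,\|u\|_{E}^{2},
\]

and the MAP estimator is characterized as a minimizer over E of the Onsager–Machlup functional I, linking the probabilistic formulation to a variational one.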


doi:10.1088/0266-5611/29/9/095017